
    Vanishing point detection for visual surveillance systems in railway platform environments

    Visual surveillance is of paramount importance in public spaces, especially in train and metro platforms, which are particularly susceptible to many types of crime, from petty theft to terrorist activity. The image resolution of visual surveillance systems is limited by a trade-off between requirements such as sensor and lens cost, transmission bandwidth and storage space. When image quality cannot be improved using high-resolution sensors, high-end lenses or IR illumination, the visual surveillance system may need to increase the resolving power of the images in software to provide accurate outputs such as, in our case, vanishing points (VPs). Despite numerous applications in camera calibration, 3D reconstruction and threat detection, a general method for VP detection has remained elusive. Rather than attempting the infeasible task of VP detection in general scenes, this paper presents a novel method that is fine-tuned to railway station environments and is shown to outperform the state of the art for that particular case. We propose a three-stage approach to accurately detect the main lines and vanishing points in low-resolution images acquired by visual surveillance systems in indoor and outdoor railway platform environments. First, several frames are combined to increase resolving power through a multi-frame image enhancer. Second, adaptive edge detection is performed and a novel line-clustering algorithm is then applied to determine the parameters of the lines that converge at VPs, based on statistics of the detected lines and heuristics about the type of scene. Finally, vanishing points are computed via a voting system designed to reject spurious lines. The proposed approach is robust: it is not affected by the ever-changing illumination and weather conditions of the scene, and it is immune to vibration. Accurate and reliable vanishing point detection provides valuable information that can aid camera calibration, automatic scene understanding, scene segmentation, semantic classification and augmented reality in platform environments.
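    A minimal sketch of the second and third stages described above (edge detection, line extraction and intersection voting), written in Python with OpenCV. The multi-frame enhancer is omitted, and all function names, thresholds and the simple pairwise voting scheme are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: Canny edges -> Hough segments -> pairwise
# intersection voting for a single dominant vanishing point.
import itertools
import numpy as np
import cv2

def detect_vanishing_point(gray, canny_lo=50, canny_hi=150):
    """Estimate a dominant vanishing point from line intersections."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=10)
    if segs is None:
        return None
    # Represent each segment as an infinite line in homogeneous coordinates.
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])
             for x1, y1, x2, y2 in segs[:, 0]]
    lines = lines[:150]  # cap pairwise cost for this naive O(n^2) sketch
    # Voting: every pair of lines proposes an intersection; keep the
    # proposal supported by the most nearly-concurrent lines.
    best, best_votes = None, 0
    for l1, l2 in itertools.combinations(lines, 2):
        p = np.cross(l1, l2)
        if abs(p[2]) < 1e-9:          # parallel lines meet at infinity
            continue
        p = p / p[2]
        # Count lines passing within 3 px of the candidate point.
        votes = sum(abs(np.dot(l, p)) / np.hypot(l[0], l[1]) < 3.0
                    for l in lines)
        if votes > best_votes:
            best, best_votes = (p[0], p[1]), votes
    return best
```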

    Present and Future of Fault Tolerant Drives Applied to Transport Applications

    An electric drive is an electromechanical conversion device consisting of an electrical machine, a power electronic inverter that interfaces between the machine and the electrical supply, a set of sensors and a digital electronic controller. Drives of this sort are manufactured in high volumes at power levels ranging from less than 1 W to many MW. Reliability of the complete system depends upon the local environment, levels of thermal cycling and predictive maintenance schedules. Overall, the drive system has a typical failure rate of the order of 10⁻⁵ failures per hour, making it much more reliable than, say, an internal combustion engine. As part of the "electrical revolution", electric drives are increasingly being developed for safety-critical applications, where their reliability is several orders of magnitude below the application requirements. This is particularly the case in electrical propulsion and actuation systems in aircraft, leading to intensive research into fault-tolerant electric drives. This paper illustrates some of the most common failure mechanisms and the consequences of such failures. It then examines architectures that achieve fault tolerance by partitioning the drive into several independent lanes, along with the penalties of adopting such an approach. The paper discusses the pros and cons of different fault-tolerant architectures and suggests future research and development steps required to increase the overall safety of electric drives.
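    The case for partitioning can be made with a back-of-the-envelope reliability calculation. The sketch below assumes identical, independently failing lanes and a constant failure rate (idealisations that ignore common-cause faults); the figures are illustrative, not taken from the paper.

```python
# P(mission failure) for a k-of-n redundant drive under independent,
# exponentially distributed lane failures. Inputs are illustrative.
from math import comb, exp

def mission_failure_prob(lanes, needed, rate_per_hour, hours):
    """P(fewer than `needed` of `lanes` identical lanes survive `hours`)."""
    p_lane_fail = 1 - exp(-rate_per_hour * hours)   # constant failure rate
    # Fail if more than (lanes - needed) lanes are lost.
    return sum(comb(lanes, k) * p_lane_fail**k * (1 - p_lane_fail)**(lanes - k)
               for k in range(lanes - needed + 1, lanes + 1))

# Single-lane drive vs. a 4-lane drive tolerating one lane loss,
# over a 10-hour flight at the 1e-5 failures/hour figure quoted above.
print(mission_failure_prob(1, 1, 1e-5, 10))   # ~1e-4
print(mission_failure_prob(4, 3, 1e-5, 10))   # ~6e-8
```

    Even this crude model shows the trade-off the paper examines: the redundant drive is orders of magnitude less likely to fail the mission, at the cost of carrying several extra lanes of hardware.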

    Motivational component profiles in university students learning histology: a comparative study between genders and different health science curricula

    Background: Students' motivation to learn basic sciences in health science curricula is poorly understood. The purpose of this study was to investigate the influence of different components of motivation (intrinsic motivation, self-determination, self-efficacy, and extrinsic career and grade motivation) on learning human histology in health science curricula, and their relationship with the students' final performance in histology. Methods: The Glynn Science Motivation Questionnaire II was used to compare motivation components for learning histology in 367 first-year male and female undergraduate students enrolled in medical, dentistry and pharmacy degree programs. Results: For intrinsic motivation, career motivation and self-efficacy, the highest values corresponded to medical students, whereas dentistry students showed the highest values for self-determination and grade motivation. Gender differences were found for career motivation in medicine, self-efficacy in dentistry, and intrinsic motivation, self-determination and grade motivation in pharmacy. The career motivation and self-efficacy components correlated with final performance in histology across all three curricula. Conclusions: Our results show that the overall motivational profile for learning histology differs among medical, dentistry and pharmacy students. This finding is potentially useful for fostering the learning process: students who are metacognitively aware of their motivation will be better equipped to self-regulate their science-learning behavior in histology. This information could help instructors and education policy makers enhance curricula, not only with respect to the cognitive component of learning but also by integrating students' levels and types of motivation into the planning, delivery and evaluation of medical education. This research was supported by the Unidad de Innovación Docente, University of Granada, Spain, through grants UGR11-294 and UGR11-303.

    Did Photosymbiont Bleaching Lead to the Demise of Planktic Foraminifer Morozovella at the Early Eocene Climatic Optimum?

    The symbiont-bearing mixed-layer planktic foraminiferal genera Morozovella and Acarinina were among the most important calcifiers of early Paleogene tropical–subtropical oceans. A marked and permanent switch in the abundance of these genera is known to have occurred at low-latitude sites at the beginning of the Early Eocene Climatic Optimum (EECO): the relative abundance of Morozovella permanently and significantly decreased along with a progressive reduction in the number of species, while the genus Acarinina almost doubled its abundance and diversified. Here we examine planktic foraminiferal assemblages and the stable isotope compositions of their tests at Ocean Drilling Program Site 1051 (northwest Atlantic) to detail the timing of this biotic event, to document it at the species level, and to test a potential cause: the loss of photosymbionts (bleaching). We also provide stable isotope measurements of bulk carbonate to refine the stratigraphy at Site 1051 and to determine when changes in Morozovella species composition and test size occurred. We demonstrate that the switch in Morozovella and Acarinina abundance occurred rapidly and coincided with a negative carbon isotope excursion known as the J event (~53 Ma), which marks the start of the EECO. We provide evidence of photosymbiont loss after the J event from a size-restricted δ13C analysis. However, this inferred bleaching was transitory and also occurred in the acarininids. The geologically rapid switch in planktic foraminiferal genera during the early Eocene was a major evolutionary change within the marine biota, but loss of photosymbionts was not its primary causal mechanism.
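    For readers unfamiliar with size-restricted δ13C analysis: in symbiont-bearing foraminifera, test δ13C typically increases with test size, so a gradient that collapses toward zero is read as evidence of bleaching. The sketch below illustrates only the gradient calculation; the data are invented placeholders, not measurements from Site 1051.

```python
# Fit a linear d13C-size gradient for one sample; a near-zero slope in a
# normally symbiont-bearing taxon would be the bleaching signal tested for.
import numpy as np

size_um = np.array([150, 212, 250, 300, 355, 425])   # sieve fractions (placeholder)
d13c    = np.array([1.1, 1.3, 1.6, 1.8, 2.1, 2.4])   # per mil VPDB (placeholder)

slope, intercept = np.polyfit(size_um, d13c, 1)
print(f"d13C-size gradient: {slope * 100:.2f} per mil per 100 um")
```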

    Significant benefits of AIP testing and clinical screening in familial isolated and young-onset pituitary tumors

    Context Germline mutations in the aryl hydrocarbon receptor-interacting protein (AIP) gene are responsible for a subset of familial isolated pituitary adenoma (FIPA) cases and sporadic pituitary neuroendocrine tumors (PitNETs). Objective To compare prospectively diagnosed AIP mutation-positive (AIPmut) PitNET patients with clinically presenting patients, and to compare the clinical characteristics of AIPmut and AIPneg PitNET patients. Design A 12-year prospective, observational study. Participants & Setting We studied probands and family members of FIPA kindreds and sporadic patients with disease onset ≤18 years or macroadenomas with onset ≤30 years (n = 1477). This was a collaborative study conducted at referral centers for pituitary diseases. Interventions & Outcome AIP testing and clinical screening for pituitary disease; comparison of characteristics of prospectively diagnosed (n = 22) vs clinically presenting AIPmut PitNET patients (n = 145), and of AIPmut (n = 167) vs AIPneg PitNET patients (n = 1310). Results Prospectively diagnosed AIPmut PitNET patients had smaller lesions with less suprasellar extension or cavernous sinus invasion and required fewer treatments, with fewer operations and no radiotherapy, compared with clinically presenting cases; there were also fewer cases with active disease and hypopituitarism at last follow-up. When comparing AIPmut and AIPneg cases, AIPmut patients were more often male and younger, more often had GH excess, pituitary apoplexy and suprasellar extension, and more often required multimodal therapy, including radiotherapy. AIPmut patients (n = 136) with GH excess were taller than their AIPneg counterparts (n = 650). Conclusions Prospectively diagnosed AIPmut patients show better outcomes than clinically presenting cases, demonstrating the benefits of genetic and clinical screening. AIP-related pituitary disease has a wide spectrum, ranging from aggressively growing lesions to a stable or indolent disease course.

    Global economic burden of unmet surgical need for appendicitis

    Background: There is a substantial gap in provision of adequate surgical care in many low- and middle-income countries. This study aimed to identify the economic burden of unmet surgical need for the common condition of appendicitis. Methods: Data on the incidence of appendicitis from 170 countries and two different approaches were used to estimate numbers of patients who do not receive surgery: as a fixed proportion of the total unmet surgical need per country (approach 1); and based on country income status (approach 2). Indirect costs with current levels of access and local quality, and those if quality were at the standards of high-income countries, were estimated. A human capital approach was applied, focusing on the economic burden resulting from premature death and absenteeism. Results: Excess mortality was 4185 per 100 000 cases of appendicitis using approach 1 and 3448 per 100 000 using approach 2. The economic burden of continuing current levels of access and local quality was US$92 492 million using approach 1 and US$73 141 million using approach 2. The economic burden of not providing surgical care to the standards of high-income countries was US$95 004 million using approach 1 and US$75 666 million using approach 2. The largest share of these costs resulted from premature death (97.7 per cent) and lack of access (97.0 per cent), in contrast to lack of quality. Conclusion: For a comparatively non-complex emergency condition such as appendicitis, increasing access to care should be prioritized. Although improving quality of care should not be neglected, increasing provision of care at current standards could reduce societal costs substantially.
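    The human capital approach named in the methods values a premature death as the discounted stream of output lost over the remaining working years. A minimal sketch, with every input an illustrative assumption rather than a figure from the study:

```python
# Present value of output lost to one premature death under the human
# capital approach. Inputs below are illustrative placeholders.
def lost_output(annual_output, years_remaining, discount_rate=0.03):
    """Sum of discounted annual output over the remaining working years."""
    return sum(annual_output / (1 + discount_rate) ** t
               for t in range(1, years_remaining + 1))

# e.g. $5,000/year of output and 35 working years remaining
print(f"${lost_output(5000, 35):,.0f}")   # ~$107,000
```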

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
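    The adjusted analysis described in the methods is a multivariable logistic model of 30-day mortality on checklist use plus case-mix covariates. A minimal sketch on synthetic data (variable names, effect sizes and the simulation itself are assumptions; the bootstrapped component is omitted):

```python
# Fit a multivariable logistic model of mortality on checklist use,
# adjusting for illustrative case-mix covariates. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
checklist = rng.integers(0, 2, n)
age = rng.normal(60, 15, n)
emergency = rng.integers(0, 2, n)
# Simulate 30-day mortality with a protective checklist effect (log OR -0.5).
logit = -4 + 0.03 * (age - 60) + 0.8 * emergency - 0.5 * checklist
death = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([checklist, age, emergency]))
fit = sm.Logit(death, X).fit(disp=0)
print("adjusted OR for checklist use:", np.exp(fit.params[1]))  # ~0.6
```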